The majority of malignancies that originate in the central nervous system (CNS) occur in the brain. Almost 11,700 people are diagnosed with a brain tumour each year, and about 34% of men and 36% of women survive for five years after being diagnosed with a brain or other CNS tumour.
Brain tumours come in several types, including benign tumours, malignant tumours, and pituitary tumours. Careful diagnosis, treatment, and planning can extend a patient's life expectancy. Magnetic resonance imaging (MRI) is the gold standard for diagnosing brain tumours.
Scanning generates a massive amount of data in the form of images, and a radiologist is the professional who examines these scans. Because brain tumours are complex, manual examination can be unreliable, and ML- and AI-based classification approaches outperform purely manual methods.
Thus, a globally accessible detection and classification system built with deep learning techniques such as CNNs, ANNs, and transfer learning (TL) will help clinicians worldwide.
I. INTRODUCTION
The signs and symptoms of a brain tumour are highly variable and depend on context. They occur when the tumour presses on and stresses the surrounding cells.
The same happens when a tumour blocks the flow of brain fluid. Common signs and symptoms include trouble walking and balancing, as well as headaches, nausea, and vomiting.
Brain tumours can be detected using diagnostic imaging techniques such as CT and MRI; depending on the location being examined and the goals of the investigation, either modality can give favourable results.
Because MRI images are straightforward to interpret and allow precise localisation of calcifications and foreign material, MRI was the modality of choice for this paper. Convolutional neural networks (CNNs) can automate the segmentation of MRI brain scans. Our mission is to aid neurophysicians and other relevant specialists in accurately identifying the presence of a tumour. The system accepts MRI images as input and returns useful information, and it can be used from any device with internet access and a web browser.
II. APPROACH
The pipeline first retrieves the input image from a file, then performs image processing and classification. Finally, the results are displayed.
Load an existing image file.
Pre-process the image.
Label the image with the CNN.
Output the result: tumour confirmed, yes/no.
Each module performs a distinct task, so every step in the pipeline is important. The design incorporates both training and test data: about 253 images were downloaded from Kaggle for training and testing the system, and the processed images are used to train a convolutional neural network model.
a. Input images: Kaggle provided 253 images to train and test the system, each labelled as containing a tumour or not. The files are in JPG, JPEG, and PNG formats. Image processing requires retrieving images from the dataset; since processing cannot begin until an image has been loaded, this step comes first in the pipeline and yields the raw, unprocessed image.
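A minimal sketch of this loading step, assuming the Kaggle dataset is unpacked into "yes" and "no" folders (the folder layout, paths, and function name are illustrative assumptions, not stated in the paper):

```python
import os
from PIL import Image

# Assumed layout: dataset/yes/ holds tumour scans, dataset/no/ holds scans without a tumour.
DATA_DIR = "dataset"

def load_raw_images(data_dir=DATA_DIR):
    """Return a list of (PIL.Image, label) pairs; label 1 = tumour, 0 = no tumour."""
    samples = []
    for folder_name, label in (("yes", 1), ("no", 0)):
        folder = os.path.join(data_dir, folder_name)
        for fname in os.listdir(folder):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                samples.append((Image.open(os.path.join(folder, fname)), label))
    return samples
```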
b. Pre-processing of brain MRI images: Pre-processing shrinks the image and removes unexpected noise. The brain MRI scan is first converted to grayscale, which improves diagnostic and classification precision.
To change the colour space of an image, we use the Pillow library (a fork of PIL, the Python Imaging Library) to convert colour images to grayscale.
Pillow's convert() method transforms images between many pixel modes, including conversions between the "L" (grayscale) and "RGB" modes. For some mode conversions, the image may first need to be converted to "RGB".
Pillow is also used to resize each image to 128x128 pixels.
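A short sketch of these pre-processing steps using Pillow (the function and variable names are illustrative):

```python
from PIL import Image

IMG_SIZE = (128, 128)

def preprocess(path):
    """Convert an MRI scan to grayscale and resize it to 128x128."""
    img = Image.open(path)
    img = img.convert("L")      # "L" mode = single-channel grayscale
    img = img.resize(IMG_SIZE)  # shrink to the network's input size
    return img
```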
The OneHotEncoder is used to assign a value of 0 or 1 to each image's class label (tumour present or absent). One-hot encoding is a procedure that lets categorical data be used in machine learning: each categorical feature is converted into binary columns, where a column is assigned 1 if the sample belongs to the category that column represents, and 0 otherwise.
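A small illustration of this encoding with scikit-learn's OneHotEncoder (the label values shown are examples only):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Example labels: 1 = tumour, 0 = no tumour.
labels = np.array([[1], [0], [1], [1]])

encoder = OneHotEncoder()
one_hot = encoder.fit_transform(labels).toarray()
# one_hot -> [[0., 1.], [1., 0.], [0., 1.], [0., 1.]]
```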
c. Convolutional neural network (CNN, or ConvNet): By employing the proper filters, a ConvNet captures the spatial relationships in an image. The architecture suits image datasets because it has fewer parameters and reuses weights, so the network can learn to distinguish fine image details. The ConvNet reduces the image to a simpler representation without losing the information needed for prediction.
The model consists of the following layers:
The first two layers are Conv2D layers with 32 filters, a kernel size of (2, 2), and 'same' padding. The first layer takes grayscale images with an input shape of (128, 128, 1).
A BatchNormalization layer is added to normalise the activations of the preceding layer.
For max pooling and dimension reduction, we add a MaxPooling2D layer with a pool size of (2, 2).
To avoid overfitting, we have implemented a Dropout layer with a dropout rate of 0.25.
Two further blocks follow the same pattern of Conv2D, BatchNormalization, MaxPooling2D, and Dropout layers. Then two more Conv2D layers are added, each with 128 filters and a kernel size of (3, 3), again followed by BatchNormalization, MaxPooling2D, and Dropout layers.
A Flatten layer prepares the output of the preceding layers for the fully connected layers. The first fully connected (Dense) layer has 512 units, and the second has 2 units, representing the output classes. A Dropout layer with a rate of 0.5 is included before the final Dense layer, which uses the sigmoid activation function to produce output class probabilities. The model is compiled with the RMSprop optimizer and the binary cross-entropy loss function, and model.summary() prints a summary. The model follows the standard CNN pattern of convolutional, pooling, and fully connected layers; dropout and batch normalisation are employed for regularisation, while the ReLU activation function adds non-linearity.
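A sketch of this architecture in Keras, reconstructed from the description above; the filter counts in the two middle blocks are not fully specified in the text, so the values shown there are assumptions:

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1)):
    model = models.Sequential([
        # Block 1: two Conv2D layers with 32 filters, (2, 2) kernels, 'same' padding
        layers.Conv2D(32, (2, 2), padding="same", activation="relu", input_shape=input_shape),
        layers.Conv2D(32, (2, 2), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),

        # Blocks 2-3: same pattern (the 64-filter counts are assumptions)
        layers.Conv2D(64, (2, 2), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(64, (2, 2), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),

        # Two Conv2D layers with 128 filters and (3, 3) kernels
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),

        # Classifier head
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(2, activation="sigmoid"),  # two output classes: tumour / no tumour
    ])
    model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```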
d. Accessible through a web browser: The system can be used from any computer or mobile device with a web browser. The website accepts an image file as input and returns a textual report indicating, with a certain degree of confidence, whether or not a tumour is present. Anvil was chosen for building and deploying the app because it supports full-stack development for machine learning. Anvil is a platform for rapidly developing complete web apps in Python: it offers a straightforward drag-and-drop interface for UI design and a server-side Python environment, and it handles the backend work so you can concentrate on your project.
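A minimal sketch of how the trained model could be exposed to the Anvil front end via the Anvil Uplink (requires the anvil-uplink package; the uplink key, function name, media handling, and normalisation are illustrative assumptions):

```python
import io
import numpy as np
import anvil.server
from PIL import Image

# 'model' is the trained CNN from the previous section (assumed already built or loaded).
anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder key from the Anvil app settings

@anvil.server.callable
def classify_scan(file):
    """Receive an uploaded MRI image from the Anvil front end and return a report string."""
    img = Image.open(io.BytesIO(file.get_bytes()))   # 'file' is an Anvil Media object
    img = img.convert("L").resize((128, 128))
    x = np.array(img).reshape(1, 128, 128, 1) / 255.0  # normalisation is an assumption
    prob = float(model.predict(x)[0][1])             # assumed: second unit = 'tumour' class
    if prob >= 0.5:
        return f"Tumour detected (confidence {prob:.2%})"
    return f"No tumour detected (confidence {1 - prob:.2%})"

anvil.server.wait_forever()
```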
Examining the numbers: The loss function measures prediction error and is minimised during training, which makes it a way to identify the best-fitting model. To achieve the best outcomes, we employed RMSProp as the optimizer. To track the loss, we trained for 30 epochs with a batch size of 40. The loss was higher at first and had levelled out by the end of the 30 epochs; as the loss plot for the first 30 epochs shows, the gradient decreases as the epochs progress.
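A sketch of the training and loss-plot step under these settings (X_train, y_train, X_test, and y_test are assumed to come from the loading, pre-processing, and encoding steps above):

```python
import matplotlib.pyplot as plt

# Train for 30 epochs with a batch size of 40, tracking loss on the test split.
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=30, batch_size=40)

# Plot training and validation loss over the 30 epochs.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Binary cross-entropy loss")
plt.legend()
plt.show()
```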
With root-mean-square propagation (RMSProp), the reported loss is simply the average of the losses observed throughout training and testing.